Due to the high activation sparsity and use of accumulates (AC) instead of expensive multiply-and-accumulates (MAC), neuromorphic spiking neural networks (SNNs) have emerged as a promising low-power alternative to traditional DNNs for several computer vision (CV) applications. However, most existing SNNs require multiple time steps for acceptable inference accuracy, hindering real-time deployment and increasing spiking activity and, consequently, energy consumption. Recent works proposed direct encoding that directly feeds the analog pixel values in the first layer of the SNN in order to significantly reduce the number of time steps. Although the overhead for the first layer MACs with direct encoding is negligible for deep SNNs and the CV processing is efficient using SNNs, the data transfer between the image sensors and the downstream processing costs significant bandwidth and may dominate the total energy. To mitigate this concern, we propose an in-sensor computing hardware-software co-design framework for SNNs targeting image recognition tasks. Our approach reduces the bandwidth between sensing and processing by 12-96x and the resulting total energy by 2.32x compared to traditional CV processing, with a 3.8% reduction in accuracy on ImageNet.
translated by 谷歌翻译
为了简化图书馆管理的过程,已经采用了许多技术,但其中大多数专注于库存管理。在发行和返回图书馆的发行和返回图书馆的领域,几乎没有任何自动化进展。在大学和学校中,宿舍经常忘记及时将发行的书籍返回图书馆。为了解决上述问题并确保及时提交已发行的书籍,这项工作开发了一个解决这些复杂性的书籍机器人。该机器人可以从A点到B点通勤,扫描并验证QR码和条形码。该机器人将具有一定的有效载荷能力来携带书籍。 QR码和条形码扫描将由PI摄像头,OpenCV和Raspberry Pi启用,从而使书籍交换安全。机器人的探测器操作将通过Blynk应用程序手动控制。本文着重于如何减少人类干预,并在机器人的帮助下自动化图书馆管理系统的问题。
translated by 谷歌翻译
边缘用户的计算和通信功能有限,为大型模型的联合学习(FL)创造了重要的瓶颈。我们考虑了一个现实但较少的跨设备FL设置,在该设置中,没有客户能够培训完整的大型模型,也不愿意与服务器共享任何中间激活。为此,我们提出了主要子模型(PRISM)训练方法,该方法利用模拟低级结构和内核正交性来训练在正交内核空间中的子模型。更具体地说,通过将单数值分解(SVD)应用于服务器模型中的原始内核,Prism首先获得了一组主要的正交核,其中每个内核都通过其单数值权衡。此后,Prism利用我们的新型抽样策略,该策略独立选择主要核的不同子集以为客户创建子模型。重要的是,具有较高的采样概率分配具有较大奇异值的内核。因此,每个子模型都是整个大型模型的低级别近似值,所有客户共同实现了接近全模型的训练。我们在各种资源受限设置中对多个数据集进行的广泛评估表明,与现有替代方案相比,PRISM的性能最高可提高10%,只有20%的子模型培训。
translated by 谷歌翻译
客户的计算和通信能力有限,在资源有限的边缘节点上对联邦学习(FL)提出了重大挑战。解决此问题的一种潜在解决方案是部署现成的稀疏学习算法,该算法在每个客户端对二进制稀疏面膜进行训练,并期望训练一致的稀疏服务器掩码。但是,正如我们在本文中调查的那样,与使用密集的模型相比,这种天真的部署与FL相比,尤其是在低客户资源预算的情况下,其准确性下降了。特别是,我们的调查表明,对客户的训练有素的面具之间存在严重的共识,这阻止了服务器面罩上的收敛,并可能导致模型性能大大下降。基于这样的关键观察,我们提出了联合彩票意识到的稀疏狩猎(Flash),这是一个统一的稀疏学习框架,可以使服务器以稀疏的子模型赢得彩票,从而在高度资源有限的客户设置下可以极大地提高性能。此外,为了解决设备异质性的问题,我们利用我们的发现来提出异性恋,在此,客户可以根据其设备资源限制拥有不同的目标稀疏预算。各种数据集(IID和非IID)上有多个模型的广泛实验评估显示了我们模型的优势,最多可屈服$ \ Mathord {\ sim} 10.1 \%$ $提高精度,$ \ mathord {\ sim} 10.26 \ times与现有替代方案相比,在类似的高参数设置中,沟通成本少于$较少。
translated by 谷歌翻译
独立组件分析是一种无监督的学习方法,用于从多元信号或数据矩阵计算独立组件(IC)。基于权重矩阵与多元数据矩阵的乘法进行评估。这项研究提出了一个新型的Memristor横杆阵列,用于实施ACY ICA和快速ICA,以用于盲源分离。数据输入以脉冲宽度调制电压的形式应用于横梁阵列,并且已实现的神经网络的重量存储在Memristor中。来自Memristor列的输出电荷用于计算重量更新,该重量更新是通过电压高于Memristor SET/RESET电压执行的。为了证明其潜在应用,采用了基于ICA架构的基于ICA架构的拟议的Memristor横杆阵列用于图像源分离问题。实验结果表明,所提出的方法非常有效地分离图像源,并且与常规ACY的基于软件的ACY实施相比,与结构相似性的百分比相比,结构相似性的百分比为67.27%,图像的对比度得到了改进。 ICA和快速ICA算法。
translated by 谷歌翻译
具有混合精度量化的大DNN可以实现超高压缩,同时保持高分类性能。但是,由于找到了可以引导优化过程的准确度量的挑战,与32位浮点(FP-32)基线相比,这些方法牺牲了显着性能,或者依赖于计算昂贵的迭代培训政策这需要预先训练的基线的可用性。要解决此问题,本文提出了BMPQ,一种使用位梯度来分析层敏感性的训练方法,并产生混合精度量化模型。 BMPQ需要单一的训练迭代,但不需要预先训练的基线。它使用整数线性程序(ILP)来动态调整培训期间层的精度,但经过固定的硬件预算。为了评估BMPQ的功效,我们对CiFar-10,CiFar-100和微小想象数据集的VGG16和Reset18进行了广泛的实验。与基线FP-32型号相比,BMPQ可以产生具有15.4倍的参数比特的模型,精度可忽略不计。与SOTA“在培训期间”相比,混合精确训练方案,我们的模型分别在CiFar-10,CiFar-100和微小想象中分别为2.1倍,2.2倍2.9倍,具有提高的精度高达14.54%。
translated by 谷歌翻译
Modern telecom systems are monitored with performance and system logs from multiple application layers and components. Detecting anomalous events from these logs is key to identify security breaches, resource over-utilization, critical/fatal errors, etc. Current supervised log anomaly detection frameworks tend to perform poorly on new types or signatures of anomalies with few or unseen samples in the training data. In this work, we propose a meta-learning-based log anomaly detection framework (LogAnMeta) for detecting anomalies from sequence of log events with few samples. LoganMeta train a hybrid few-shot classifier in an episodic manner. The experimental results demonstrate the efficacy of our proposed method
translated by 谷歌翻译
As language models (LMs) scale, they develop many novel behaviors, good and bad, exacerbating the need to evaluate how they behave. Prior work creates evaluations with crowdwork (which is time-consuming and expensive) or existing data sources (which are not always available). Here, we automatically generate evaluations with LMs. We explore approaches with varying amounts of human effort, from instructing LMs to write yes/no questions to making complex Winogender schemas with multiple stages of LM-based generation and filtering. Crowdworkers rate the examples as highly relevant and agree with 90-100% of labels, sometimes more so than corresponding human-written datasets. We generate 154 datasets and discover new cases of inverse scaling where LMs get worse with size. Larger LMs repeat back a dialog user's preferred answer ("sycophancy") and express greater desire to pursue concerning goals like resource acquisition and goal preservation. We also find some of the first examples of inverse scaling in RL from Human Feedback (RLHF), where more RLHF makes LMs worse. For example, RLHF makes LMs express stronger political views (on gun rights and immigration) and a greater desire to avoid shut down. Overall, LM-written evaluations are high-quality and let us quickly discover many novel LM behaviors.
translated by 谷歌翻译
A framework for creating and updating digital twins for dynamical systems from a library of physics-based functions is proposed. The sparse Bayesian machine learning is used to update and derive an interpretable expression for the digital twin. Two approaches for updating the digital twin are proposed. The first approach makes use of both the input and output information from a dynamical system, whereas the second approach utilizes output-only observations to update the digital twin. Both methods use a library of candidate functions representing certain physics to infer new perturbation terms in the existing digital twin model. In both cases, the resulting expressions of updated digital twins are identical, and in addition, the epistemic uncertainties are quantified. In the first approach, the regression problem is derived from a state-space model, whereas in the latter case, the output-only information is treated as a stochastic process. The concepts of It\^o calculus and Kramers-Moyal expansion are being utilized to derive the regression equation. The performance of the proposed approaches is demonstrated using highly nonlinear dynamical systems such as the crack-degradation problem. Numerical results demonstrated in this paper almost exactly identify the correct perturbation terms along with their associated parameters in the dynamical system. The probabilistic nature of the proposed approach also helps in quantifying the uncertainties associated with updated models. The proposed approaches provide an exact and explainable description of the perturbations in digital twin models, which can be directly used for better cyber-physical integration, long-term future predictions, degradation monitoring, and model-agnostic control.
translated by 谷歌翻译
As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as 'Constitutional AI'. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels.
translated by 谷歌翻译